One of the morals, yes. (Though I’d debate “almost certainly”. Safer would be “the power to torture acausally implies the power to torture causally”.)
Not always and everywhere: you could be causally safe from an AI whose future lightcone doesn’t intersect your own, but not acausally safe.
For practical purposes, you’re almost certainly right.